    Efficiency Matters: Speeding Up Automated Testing with GUI Rendering Inference

    Due to the importance of Android app quality assurance, many automated GUI testing tools have been developed. Although the underlying test algorithms have improved, the impact of GUI rendering has been overlooked. On the one hand, setting a long waiting time so that events execute on fully rendered GUIs slows down the testing process. On the other hand, setting a short waiting time causes events to execute on partially rendered GUIs, which harms testing effectiveness. An optimal waiting time should strike a balance between effectiveness and efficiency. We propose AdaT, a lightweight image-based approach that dynamically adjusts the inter-event time based on the GUI rendering state. Given a real-time stream of GUI frames, AdaT uses a deep learning model to infer the rendering state and synchronizes with the testing tool to schedule the next event once the GUI is fully rendered. Our evaluations demonstrate the accuracy, efficiency, and effectiveness of the approach. We also integrate AdaT with an existing automated testing tool to demonstrate its usefulness in covering more activities and executing more events on fully rendered GUIs. Comment: Proceedings of the 45th International Conference on Software Engineering
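    The scheduling loop the abstract describes can be pictured with a minimal sketch. This is not the authors' code: AdaT infers the rendering state with a trained deep learning model, whereas the stand-in below uses a simple consecutive-frame-diff heuristic so the example stays self-contained, and it assumes a single Android device reachable via adb. All function names are illustrative.

    ```python
    # Sketch of AdaT-style adaptive event scheduling (illustrative, not the
    # authors' implementation). A frame-diff heuristic substitutes for the
    # learned rendering-state classifier described in the paper.
    import subprocess
    import time

    def capture_frame() -> bytes:
        """Grab the current screen as raw PNG bytes via adb."""
        return subprocess.run(
            ["adb", "exec-out", "screencap", "-p"],
            capture_output=True, check=True,
        ).stdout

    def is_fully_rendered(stable_checks: int = 2, interval: float = 0.1) -> bool:
        """Stand-in for the rendering-state model: treat the GUI as fully
        rendered once `stable_checks` consecutive frames are identical."""
        prev = capture_frame()
        for _ in range(stable_checks):
            time.sleep(interval)
            cur = capture_frame()
            if cur != prev:
                return False
            prev = cur
        return True

    def wait_until_rendered(timeout: float = 5.0) -> None:
        """Adaptive inter-event delay: poll until the GUI looks fully
        rendered, instead of sleeping for a fixed worst-case interval."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if is_fully_rendered():
                return
        # On timeout, fall through and execute the event anyway, as a
        # fixed-wait strategy would.

    def run_test_loop(events) -> None:
        for event in events:
            wait_until_rendered()
            event()  # dispatch the next GUI event from the testing tool
    ```

    The point of the design is that the delay is paid only while the GUI is actually rendering: stable screens release the next event almost immediately, while slow-rendering screens get the full wait, which is the effectiveness/efficiency balance the abstract argues for.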

    Towards Benchmarking GUI Compatibility Testing on Mobile Applications

    The GUI is a bridge connecting users and applications. Existing GUI testing tasks can be categorized into two groups: functionality testing and compatibility testing. While functionality testing focuses on detecting application runtime bugs, compatibility testing aims at detecting bugs resulting from device or platform differences. To automate testing procedures and improve testing efficiency, previous works have proposed dozens of tools. To evaluate these tools, researchers have published benchmarks for functionality testing. Comparatively, in compatibility testing, the question "Do existing methods indeed effectively assist test case replay?" is not well answered. To answer this question and advance related research in GUI compatibility testing, we propose a benchmark for GUI compatibility testing. In our experiments, we compare the replay success rates of existing tools. Based on the experimental results, we summarize causes that may lead to ineffectiveness in test case replay and propose opportunities for improving the state of the art.
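    The benchmark's headline metric, replay success rate, is simply the fraction of test cases a tool replays correctly on a target device. A minimal sketch of that aggregation follows; the tool names and counts are made up for illustration, not taken from the paper's results.

    ```python
    # Hypothetical scoring for a compatibility-testing benchmark:
    # replay success rate = successfully replayed cases / total cases, per tool.
    from collections import defaultdict

    def replay_success_rate(results):
        """results: iterable of (tool, replayed_ok) pairs, one per test case."""
        ok, total = defaultdict(int), defaultdict(int)
        for tool, replayed in results:
            total[tool] += 1
            ok[tool] += int(replayed)
        return {tool: ok[tool] / total[tool] for tool in total}

    # Illustrative data only.
    results = [("ToolA", True), ("ToolA", False), ("ToolB", True), ("ToolB", True)]
    for tool, rate in sorted(replay_success_rate(results).items()):
        print(f"{tool}: {rate:.0%}")  # ToolA: 50%, ToolB: 100%
    ```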